**Title:** Efficient Numerical Methods for High-Dimensional Approximation Problems

**Abstract:**

In
the field of uncertainty quantification, the effects of parameter
uncertainties on scientific simulations may be studied by integrating or
approximating a quantity of interest as a function over the parameter
space. If this is done numerically, using regular grids with a fixed
resolution, the required computational work increases exponentially with
respect to the number of uncertain parameters -- a phenomenon known as
the curse of dimensionality.
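The exponential growth can be made concrete with a two-line count (a generic illustration, not taken from the talk): a regular grid with n points per axis in d dimensions requires n**d function evaluations.

```python
# Generic illustration of the curse of dimensionality: the node count of
# a regular tensor grid with n points per axis grows as n**d with the
# number of dimensions d.

def tensor_grid_size(n: int, d: int) -> int:
    """Number of evaluation nodes in a regular d-dimensional grid."""
    return n ** d

for d in (1, 2, 5, 10):
    print(d, tensor_grid_size(10, d))
# With 10 points per axis, 10 uncertain parameters already require
# 10**10 evaluations -- infeasible when each one is a PDE solve.
```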
We study two methods that can help break this curse: discrete least
squares polynomial approximation and kernel-based approximation. For the
former, we adaptively determine sparse polynomial bases and use
evaluations at random, quasi-optimally distributed nodes; for the
latter, we use evaluations on sparse grids, as introduced by Smolyak.
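Smolyak's construction replaces the full tensor-product index set by a simplex-shaped one. The following sketch (the index-set convention and the name `sparse_index_set` are assumptions for illustration) shows how much smaller the sparse set is than its full-grid counterpart.

```python
from itertools import product

def sparse_index_set(d: int, level: int):
    """Multi-indices i in N^d (components >= 1) with
    i_1 + ... + i_d <= d + level: the simplex-shaped index set
    underlying Smolyak's sparse construction."""
    return [i for i in product(range(1, level + 2), repeat=d)
            if sum(i) <= d + level]

# For fixed level, the sparse set grows only polynomially in d,
# while the full tensor set of the same maximal level grows like
# (level + 1) ** d.
d, level = 5, 3
print(len(sparse_index_set(d, level)))  # 56 indices
print((level + 1) ** d)                 # vs. 1024 for the full tensor set
```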
To mitigate the additional cost of solving differential equations at
each evaluation node, we extend multilevel methods to the approximation
of response surfaces. For this purpose, we provide a general analysis
that exhibits multilevel algorithms as special cases of an abstract
version of Smolyak's algorithm.
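The multilevel idea can be sketched on a toy problem (the toy model and all names here are assumptions for illustration, not the analysis from the talk): writing the fine-level expectation as a telescoping sum, E[f_L] = E[f_0] + sum_l E[f_l - f_{l-1}], lets cheap coarse approximations absorb most of the sampling cost, while the small level-to-level corrections need only few samples.

```python
import random

def f(x: float, l: int) -> float:
    """Toy level-l approximation of x**2: x rounded to 2**-l precision."""
    h = 2.0 ** -l
    return (round(x / h) * h) ** 2

def multilevel_mean(L: int, n0: int, seed: int = 0) -> float:
    """Estimate E[f(X, L)] for X ~ U(0, 1) via the telescoping sum,
    halving the sample count on each finer (more expensive) level."""
    rng = random.Random(seed)
    est = 0.0
    for l in range(L + 1):
        n = max(n0 // 2 ** l, 1)
        s = 0.0
        for _ in range(n):
            x = rng.random()  # one draw couples levels l and l - 1
            s += f(x, l) - (f(x, l - 1) if l > 0 else 0.0)
        est += s / n
    return est

print(multilevel_mean(L=6, n0=20000))  # close to E[X^2] = 1/3
```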
In financial mathematics, high-dimensional approximation problems occur
in the pricing of derivatives with multiple underlying assets.
The value function of American options can theoretically be determined
backwards in time using the dynamic programming principle.
Numerical implementations, however, face the curse of dimensionality
because each asset corresponds to a dimension in the domain of the value
function. Lack of regularity of the value function at the optimal
exercise boundary further increases the computational complexity.
As an alternative, we propose a novel method that determines an optimal
exercise strategy as the solution of a stochastic optimization problem
and subsequently computes the option value by simple Monte Carlo
simulation. For this purpose, we represent the American option price as
the supremum of the expected payoff over a set of randomized exercise
strategies. Unlike the corresponding classical representation over
subsets of Euclidean space, this relaxation gives rise to a well-behaved
objective function that can be globally optimized using standard
optimization routines.
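A minimal sketch of the randomized-strategy idea (the sigmoid parametrization, the Bermudan put setting, and all parameter values are assumptions for illustration; the talk's construction may differ): instead of a hard exercise rule, the option is exercised at each date with a probability that depends smoothly on the asset price, so the expected payoff becomes a smooth function of the strategy parameter that a standard optimizer can handle.

```python
import math
import random

def payoff(S: float, K: float = 100.0) -> float:
    """Put payoff."""
    return max(K - S, 0.0)

def expected_payoff(b: float, n_paths: int = 20000, n_steps: int = 10,
                    S0: float = 100.0, K: float = 100.0, r: float = 0.05,
                    sigma: float = 0.2, T: float = 1.0, eps: float = 2.0,
                    seed: int = 0) -> float:
    """Expected discounted payoff of a Bermudan put under a randomized
    exercise rule: at each date, exercise with probability
    sigmoid((b - S) / eps), which is smooth in the threshold b."""
    rng = random.Random(seed)
    dt = T / n_steps
    total = 0.0
    for _ in range(n_paths):
        S = S0
        value = 0.0
        alive = 1.0  # probability the option is still unexercised
        for k in range(1, n_steps + 1):
            z = rng.gauss(0.0, 1.0)
            S *= math.exp((r - 0.5 * sigma ** 2) * dt
                          + sigma * math.sqrt(dt) * z)
            p_ex = 1.0 / (1.0 + math.exp(-(b - S) / eps))
            if k == n_steps:
                p_ex = 1.0  # decision is forced at maturity
            value += alive * p_ex * math.exp(-r * k * dt) * payoff(S, K)
            alive *= 1.0 - p_ex
        total += value
    return total / n_paths

# The smooth one-dimensional objective can be searched globally, e.g.
# by a crude grid scan (any standard optimizer would do):
best_b = max(range(60, 101, 5),
             key=lambda b: expected_payoff(b, n_paths=4000))
print(best_b, expected_payoff(best_b, n_paths=4000))
```

The resulting value is a lower bound on the option price, since any fixed strategy is at best optimal; the point of the relaxation is that the objective is smooth in `b`, unlike the discontinuous objective obtained from hard threshold rules.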

**Biography:**

Soeren Wolfers is a PhD
candidate in the Applied Mathematics and Computational Science (AMCS)
program and a member of the stochastic numerics group at KAUST,
supervised by Professor Raul Tempone. He holds a BSc in Mathematics from
Heidelberg University and an MSc in Mathematics from the University of
Bonn. His research interests include uncertainty quantification,
stochastic simulations, optimal control, and mathematical finance.

**Venue and time:** Wednesday, 06/02/2019, 10:00 AM. Building 4, Level 5, Room 5220